A Robust Procedure for On-Line Learning of Neural Networks in Control Strategy Development

2001 ◽  
Vol 73 (6) ◽  
pp. 655-656
Author(s):  
L. Ender ◽  
R. Maciel Filho
1999 ◽  
Vol 10 (2) ◽  
pp. 253-271 ◽  
Author(s):  
P. Campolucci ◽  
A. Uncini ◽  
F. Piazza ◽  
B.D. Rao

2002 ◽  
Vol 124 (3) ◽  
pp. 364-374 ◽  
Author(s):  
Alexander G. Parlos ◽  
Sunil K. Menon ◽  
Amir F. Atiya

On-line filtering of stochastic variables that are difficult or expensive to measure directly has been widely studied. This paper presents a practical algorithm for adaptive state filtering when the underlying nonlinear state equations are only partially known; the unknown dynamics are approximated constructively with neural networks. The algorithm follows the two-step prediction-update structure of the Kalman filter, accounts for the unmodeled nonlinear dynamics, and makes no assumptions about the system noise statistics. The filter is implemented using static and dynamic feedforward neural networks, and both off-line and on-line learning algorithms are presented for training the filter networks. Two case studies are considered and comparisons with Extended Kalman Filters (EKFs) are performed. In the first case study the EKF converges but yields higher state-estimation errors than the equivalent neural filter with on-line learning; in the second, more complex case study the developed EKF does not converge. In both case studies the off-line-trained neural state filters converge quite rapidly and exhibit acceptable performance. On-line training further enhances filter performance, decoupling the eventual filter accuracy from the accuracy of the assumed system model.
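The prediction-update structure described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: a small feedforward network stands in for the unknown state dynamics, the predict step propagates the current estimate through that network, the update step corrects with the measurement innovation, and on-line learning takes one gradient step on the squared innovation after each update. The network sizes, learning rate, fixed scalar gain `K`, and the logistic-map test system are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralStateFilter:
    """Two-step predict/update filter whose state-transition model is a
    small feedforward network trained on-line from the innovation.
    Hyperparameters here are illustrative, not from the paper."""

    def __init__(self, n_state, n_hidden=8, lr=1e-2, gain=0.5):
        self.W1 = 0.1 * rng.standard_normal((n_hidden, n_state))
        self.b1 = np.zeros(n_hidden)
        self.W2 = 0.1 * rng.standard_normal((n_state, n_hidden))
        self.b2 = np.zeros(n_state)
        self.lr, self.K = lr, gain

    def _predict(self, x):
        # neural approximation of the (unknown) state transition x -> f(x)
        h = np.tanh(self.W1 @ x + self.b1)
        return self.W2 @ h + self.b2, h

    def step(self, x_est, y_meas):
        # 1) prediction: propagate the estimate through the learned model
        x_pred, h = self._predict(x_est)
        # 2) update: correct with the innovation (full-state measurement assumed)
        innov = y_meas - x_pred
        x_new = x_pred + self.K * innov
        # on-line learning: one SGD step on 0.5*||innov||^2,
        # backpropagated through the small network
        dout = -innov
        dh = (self.W2.T @ dout) * (1.0 - h**2)
        self.W2 -= self.lr * np.outer(dout, h)
        self.b2 -= self.lr * dout
        self.W1 -= self.lr * np.outer(dh, x_est)
        self.b1 -= self.lr * dh
        return x_new, innov

# Illustrative run: track the chaotic logistic map, whose dynamics the
# filter does not know a priori and must learn from the innovation stream.
true_map = lambda x: 3.9 * x * (1.0 - x)
filt = NeuralStateFilter(n_state=1)
x_true, x_est = np.array([0.3]), np.array([0.5])
errs = []
for t in range(2000):
    x_true = true_map(x_true)
    x_est, innov = filt.step(x_est, x_true)
    errs.append(abs(innov[0]))
```

Under these assumptions the prediction error should shrink as the network absorbs the dynamics on-line, mirroring the abstract's point that on-line training decouples eventual filter accuracy from the assumed model.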


2000 ◽  
Vol 51 (6) ◽  
pp. 691-697 ◽  
Author(s):  
A. C. C. Coolen ◽  
D. Saad ◽  
Yuan-Sheng Xiong
